Overview of CLEF 2009 INFILE track
Abstract
The INFILE@CLEF 2009 track is the second run of this track on the evaluation of cross-language adaptive filtering systems. It uses the same corpus as the 2008 track, composed of 300,000 newswires from Agence France Presse (AFP) in three languages: Arabic, English and French, and a set of 50 topics covering both a general domain and a specific domain (scientific and technological information). This year we proposed two tasks: a batch filtering task and an interactive task designed to test adaptive methods. Results for both tasks are presented in this paper.
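To make the contrast between the two tasks concrete, the sketch below shows the decision loop of a simple adaptive (interactive) filter: documents arrive one at a time, each is scored against a topic profile, and feedback on retained documents is used to update that profile. This is only an illustration under assumed names and parameters (vectorize, cosine, THRESHOLD, ALPHA); it is not the INFILE evaluation protocol or any participant's system.

```python
# Illustrative adaptive filtering loop (assumption for illustration only):
# the topic profile is a term-weight vector, incoming newswires are scored
# against it, and feedback on retained documents nudges the profile.
from collections import Counter
import math

THRESHOLD = 0.3   # assumed decision threshold
ALPHA = 0.1       # assumed feedback weight

def vectorize(text):
    """Bag-of-words term frequencies for a piece of text."""
    return Counter(text.lower().split())

def cosine(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def adaptive_filter(topic_description, stream, get_feedback):
    """Filter a document stream one item at a time, adapting the profile."""
    profile = vectorize(topic_description)
    retained = []
    for doc in stream:
        doc_vec = vectorize(doc)
        if cosine(profile, doc_vec) >= THRESHOLD:
            retained.append(doc)
            # Feedback is only available for documents the system retains,
            # as in the interactive setting described above.
            if get_feedback(doc):                    # judged relevant
                for term, tf in doc_vec.items():
                    profile[term] += ALPHA * tf      # reinforce profile terms
            else:                                    # judged non-relevant
                for term, tf in doc_vec.items():
                    profile[term] = max(0.0, profile[term] - ALPHA * tf)
    return retained
```

In the batch task, by contrast, the whole collection is available up front and no feedback is returned, so a profile built from the topic description stays fixed.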
Similar resources
SINAI at INFILE 2009: Experiments with Google News
This paper describes the SINAI team's participation in the INFILE routing and filtering track of the CLEF campaign. This is the first participation of the SINAI research group in the INFILE task. We have participated in the batch filtering subtask and submitted two experiments: one using the topics' text as learning data to train a classifier, and another one where training data has been construc...
UAIC: Participation in INFILE@CLEF Task
This year marked UAIC's first participation in the INFILE@CLEF competition. The campaign evaluates cross-language adaptive filtering systems: the goal is to build an automated system that separates relevant from non-relevant documents, written in different languages, in an incoming stream of textual information with respect to a given profile. A brief descriptio...
Batch Document Filtering Using Nearest Neighbor Algorithm
This paper describes the participation of the LIG lab in the batch filtering task of the INFILE (INformation FILtering Evaluation) campaign of CLEF 2009. As opposed to the online task, where the server provides the documents one by one, all of the documents are provided beforehand in the batch task, which is why feedback is not possible in the batch setting. We propose in this paper ...
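A minimal sketch of what such a batch filter might look like, using TF-IDF cosine similarity as a stand-in for the nearest-neighbour scoring (the function name and cutoff below are assumptions for illustration, not the LIG implementation): every topic is scored against every document in a single pass, and no feedback or profile adaptation takes place.

```python
# Hypothetical batch filtering sketch (illustration only): score every
# document against every topic with TF-IDF cosine similarity and keep,
# for each topic, the documents above a similarity cutoff.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def batch_filter(topics, documents, cutoff=0.2):
    """Return {topic_index: [document_index, ...]}; no feedback is used."""
    vectorizer = TfidfVectorizer()
    doc_matrix = vectorizer.fit_transform(documents)      # all docs known up front
    topic_matrix = vectorizer.transform(topics)
    scores = cosine_similarity(topic_matrix, doc_matrix)  # topics x documents
    return {
        t: [d for d in range(len(documents)) if scores[t, d] >= cutoff]
        for t in range(len(topics))
    }
```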
CLEF 2009 Ad Hoc Track Overview: TEL and Persian Tasks
The 2009 Ad Hoc track was to a large extent a repetition of last year’s track, with the same three tasks: Tel@CLEF, Persian@CLEF, and Robust-WSD. In this first of the two track overviews, we describe the objectives and results of the TEL and Persian tasks and provide some statistical analyses.